132 research outputs found
Poster: Improving Bug Localization with Report Quality Dynamics and Query Reformulation
Recent findings from a user study suggest that IR-based bug localization
techniques do not perform well if the bug report lacks rich structured
information such as relevant program entity names. Conversely, excessive
structured information such as stack traces might not always be helpful for
automated bug localization. In this paper, we conduct a large empirical study
using 5,500 bug reports from eight subject systems and replicate three
existing studies from the literature. Our findings (1)
empirically demonstrate how the quality dynamics of bug reports affect the
performance of IR-based bug localization, and (2) suggest potential ways
(e.g., query reformulation) to overcome such limitations.

Comment: The 40th International Conference on Software Engineering (Companion volume, Poster Track) (ICSE 2018), pp. 348--349, Gothenburg, Sweden, May, 2018
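The IR-based bug localization the abstract discusses can be sketched as a TF-IDF ranking of source files against a query drawn from the bug report text. The corpus, token lists, and scoring below are illustrative assumptions for the sketch, not the paper's actual setup:

```python
import math
from collections import Counter

# Hypothetical corpus: file name -> identifier tokens extracted from source.
corpus = {
    "ui/Button.java": "button click render label widget".split(),
    "net/HttpClient.java": "http client request timeout socket".split(),
    "io/FileCache.java": "cache file read write evict timeout".split(),
}

def tfidf_rank(query_tokens, corpus):
    """Rank files by summed TF-IDF weight of query tokens (a minimal IR model)."""
    n = len(corpus)
    # Document frequency: in how many files each token appears.
    df = Counter()
    for tokens in corpus.values():
        df.update(set(tokens))
    scores = {}
    for name, tokens in corpus.items():
        tf = Counter(tokens)
        scores[name] = sum(
            tf[t] * math.log(n / df[t])  # term frequency * inverse doc. frequency
            for t in query_tokens if t in tf
        )
    return sorted(scores.items(), key=lambda kv: -kv[1])

# A query reformulated from a bug report: keep entity-like tokens only,
# mirroring the "query reformulation" idea in the abstract.
ranking = tfidf_rank("request timeout socket".split(), corpus)
print(ranking[0][0])
```

A query rich in program entity names ranks the buggy file highly; a report with only generic words (or noisy stack-trace tokens spread across many files) flattens the scores, which is the quality dynamic the study examines.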
Automatic Prediction of Rejected Edits in Stack Overflow
The content quality of shared knowledge in Stack Overflow (SO) is crucial in
supporting software developers with their programming problems. Thus, SO allows
its users to suggest edits to improve the quality of a post (i.e., question and
answer). However, existing research shows that many suggested edits in SO are
rejected due to undesired contents/formats or violating edit guidelines. Such a
scenario frustrates or demotivates users who would like to conduct good-quality
edits. Therefore, our research focuses on assisting SO users by offering them
suggestions on how to improve their editing of posts. First, we manually
investigate 764 rejected edits (382 questions + 382 answers) identified through
rollbacks and produce a catalog of 19 rejection reasons. Second, we extract 15
text- and user-based features to capture those rejection reasons. Third, we
develop four
machine learning models using those features. Our best-performing model can
predict rejected edits with 69.1% precision, 71.2% recall, 70.1% F1-score, and
69.8% overall accuracy. Fourth, we introduce an online tool named EditEx that
works with the SO edit system. EditEx can assist users while editing posts by
suggesting the potential causes of rejections. We recruit 20 participants to
assess the effectiveness of EditEx. Half of the participants (i.e., treatment
group) use EditEx and another half (i.e., control group) use the SO standard
edit system to edit posts. According to our experiment, EditEx can support the
SO standard edit system in preventing 49% of rejected edits, including the
commonly rejected ones. Even in free-form regular edits, it can prevent 12% of
rejections. The treatment group finds the potential rejection reasons
identified by EditEx influential. Furthermore, the median workload of
suggesting edits with EditEx is half that of the SO edit system.

Comment: Accepted for publication in Empirical Software Engineering (EMSE) journal
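The pipeline the abstract describes (extract text- and user-based features from a suggested edit, then classify it as likely rejected) can be sketched as follows. The three features and the threshold score are illustrative stand-ins: the paper's 15 features and four trained models are not detailed in the abstract.

```python
# Hypothetical feature extractor for a suggested Stack Overflow edit.
def extract_features(original, edited, editor_reputation):
    return {
        # Large deletions are a plausible rejection cause (removing content).
        "chars_removed": max(0, len(original) - len(edited)),
        # Dropping code fences often violates edit guidelines.
        "code_blocks_removed": max(0, original.count("```") - edited.count("```")),
        # User-based feature: an assumed low-reputation signal.
        "low_reputation": int(editor_reputation < 100),
    }

def predict_rejected(features):
    """Toy linear score standing in for the paper's trained classifiers."""
    score = (0.002 * features["chars_removed"]
             + 1.5 * features["code_blocks_removed"]
             + 0.8 * features["low_reputation"])
    return score > 1.0  # True -> flag the edit as likely to be rejected

# Usage: an edit that deletes a code block by a low-reputation user is flagged.
feats = extract_features("text ```code```", "text", editor_reputation=50)
print(predict_rejected(feats))
```

In the real system the score would come from a model trained on labeled rollbacks (the paper reports 70.1% F1 for its best model); the tool's value is surfacing the *reason* behind a likely rejection before the edit is submitted.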